This study examines the comparative effectiveness of automated feedback from AI-driven language assessment tools and traditional human feedback in improving language learning outcomes. As artificial intelligence (AI) becomes increasingly integrated into educational technology, understanding its role in supporting language learners is critical. To address this, we conducted a 12-week controlled experiment with 80 English as a Foreign Language (EFL) learners, dividing participants into two groups: one receiving feedback from an AI-driven tool and the other from human instructors. Improvements in language proficiency were assessed across grammar, vocabulary, and pronunciation. Findings reveal that both feedback methods significantly enhanced language skills. Human feedback proved superior in fostering gains in pronunciation and contextual understanding, while AI-driven feedback excelled at detecting and correcting grammatical errors. These results suggest that a hybrid model blending AI and human feedback could provide the most effective support for language learners. This research enriches the literature on AI in education and offers practical insights for incorporating AI tools into language learning curricula.